Unsupervised Learning by Examples: On-line versus Off-line
Authors
Abstract
Similar Resources
Unsupervised learning by examples: On-line versus off-line.
We study both on-line and off-line learning in the following unsupervised learning scheme: p patterns are sampled independently from a distribution on the N-sphere with a single symmetry-breaking orientation. Exact results are obtained in the limit p → ∞ and N → ∞ with finite ratio p/N. One finds that for smooth pattern distributions, the asymptotic behavior of the optimal off-line and on-line learni...
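The setup in this abstract can be illustrated with a small numerical sketch. The concrete choices below (a Gaussian pattern distribution with signal strength `a` along the symmetry-breaking direction `B`, a sample-mean off-line estimator, and a fixed-rate on-line update) are assumptions for illustration only, not the paper's analysis:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 200                      # input dimension
alpha = 5.0                  # ratio p / N
p = int(alpha * N)

# Symmetry-breaking orientation B on the N-sphere (toy choice).
B = rng.standard_normal(N)
B /= np.linalg.norm(B)

# Sample p patterns: isotropic Gaussian noise plus a component along B.
a = 1.0                      # signal strength (assumed)
patterns = rng.standard_normal((p, N)) + a * B

# Off-line estimate of B: use the whole data set at once (sample mean).
J_off = patterns.mean(axis=0)
rho_off = abs(J_off @ B) / np.linalg.norm(J_off)

# On-line estimate: one pass, each pattern seen once and then discarded.
J_on = np.zeros(N)
eta = 0.05                   # fixed learning rate (assumed)
for x in patterns:
    J_on += eta * (x - J_on)
rho_on = abs(J_on @ B) / np.linalg.norm(J_on)

print(f"overlap with B: off-line {rho_off:.2f}, on-line {rho_on:.2f}")
```

The overlap ρ = |J·B|/|J| plays the role of the order parameter: off-line access to all p patterns typically yields a larger overlap than a single on-line pass at fixed learning rate.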
Off-line Learning from Clustered Input Examples
We analyze the generalization ability of a simple perceptron acting on a structured input distribution for the simple case of two clusters of input data and a linearly separable rule. The generalization ability, computed for three learning scenarios (maximal stability, Gibbs, and optimal learning), is found to improve with the separation between the clusters, and is bounded from below by the res...
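A toy version of this two-cluster setting can be sketched numerically. The choices below are assumptions for illustration: a Hebbian student (one of the simplest rules, not one of the three scenarios the abstract analyzes) and a teacher aligned with the cluster axis so that the rule is linearly separable and respects the cluster structure:

```python
import numpy as np

rng = np.random.default_rng(1)
N, p, p_test = 100, 500, 2000

def gen_error(separation):
    """Generalization error of a Hebbian student on two-cluster data."""
    c = np.zeros(N)
    c[0] = 1.0                    # cluster axis (assumed)
    w_teacher = c                 # teacher aligned with the cluster axis (assumed)

    def sample(m):
        # Two clusters centred at ±(separation/2)·c plus isotropic noise.
        signs = rng.choice([-1.0, 1.0], size=m)[:, None]
        return signs * (separation / 2) * c + rng.standard_normal((m, N))

    X = sample(p)
    y = np.sign(X @ w_teacher)
    w_student = (y[:, None] * X).sum(axis=0)      # Hebbian learning

    Xt = sample(p_test)                           # fresh test examples
    return np.mean(np.sign(Xt @ w_student) != np.sign(Xt @ w_teacher))

for d in (0.0, 2.0, 4.0):
    print(f"separation {d}: generalization error {gen_error(d):.3f}")
```

Even for this crude student, the measured generalization error decreases as the cluster separation grows, in line with the trend the abstract reports for the more refined learning scenarios.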
Specialization Processes in On-line Unsupervised Learning
From the recent analysis of supervised learning by on-line gradient descent in multilayered neural networks it is known that the necessary process of student specialization can be delayed significantly. We demonstrate that this phenomenon also occurs in various models of unsupervised learning. A solvable model of competitive learning is presented, which identifies prototype vectors suitable for t...
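A minimal sketch of on-line competitive learning, assuming a simple winner-take-all rule on a two-cluster mixture (a toy stand-in for the solvable model, not its exact definition). Starting both prototypes near the origin mimics the unspecialized state from which specialization must emerge:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy data: a balanced mixture of two Gaussian clusters in the plane.
centers = np.array([[+2.0, 0.0], [-2.0, 0.0]])
data = np.concatenate([c + rng.standard_normal((500, 2)) for c in centers])
rng.shuffle(data)

# On-line winner-take-all competitive learning with two prototype vectors.
protos = 0.01 * rng.standard_normal((2, 2))    # near-identical start: unspecialized
eta = 0.05                                     # fixed learning rate (assumed)
for x in data:
    winner = np.argmin(((protos - x) ** 2).sum(axis=1))
    protos[winner] += eta * (x - protos[winner])   # only the winner moves

print(protos)   # each prototype should end up near one cluster centre
```

Once one prototype drifts toward a cluster it keeps winning that cluster's examples, so the initially symmetric pair splits into specialized prototype vectors; the delay of this symmetry breaking is what the abstract studies.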
Off-Line Learning
The core issue of interactive design is to search for a point in a usually large design space. Previous studies have looked into search strategies within the interaction. In this report, we consider how previous records of interactions can help future searches. Recall that an interaction can be considered as a sequence of state transitions. A state is described by the current samp...
On-line versus off-line learning in the linear perceptron: A comparative study.
The spherical perceptron with N inputs and a linear output does not present optimal generalization if trained by minimization of the standard quadratic cost function E = (1/2) Σ_{μ=1}^{N} (b^μ − h^μ)², where b^μ and h^μ are the outputs from the rule (teacher) and hypothesis (student) networks for example μ, and there are N examples. We derive an optimal algorithm for on-line learning of examples which outp...
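The contrast between minimizing this quadratic cost off-line and learning the examples on-line can be sketched as follows. The normalization b = B·ξ/√N, the fixed learning rate, and the single-pass delta rule are assumptions for illustration; the paper's optimal on-line algorithm is more refined:

```python
import numpy as np

rng = np.random.default_rng(3)
N, p = 50, 200

# Teacher (rule) vector B; linear outputs b = B·ξ/√N (assumed normalization).
B = rng.standard_normal(N)
xi = rng.standard_normal((p, N))
b = xi @ B / np.sqrt(N)

# Off-line: minimize E = ½ Σ_μ (b_μ − h_μ)² over all examples by least squares.
J_off, *_ = np.linalg.lstsq(xi / np.sqrt(N), b, rcond=None)

# On-line: a single pass of the delta rule, each example used once.
J_on = np.zeros(N)
eta = 0.5                                  # fixed learning rate (assumed)
for x, target in zip(xi, b):
    h = J_on @ x / np.sqrt(N)
    J_on += eta * (target - h) * x / np.sqrt(N)

for name, J in (("off-line", J_off), ("on-line", J_on)):
    print(name, "distance to rule:", round(float(np.mean((J - B) ** 2)), 4))
```

With noiseless examples and p > N, off-line least squares recovers the rule exactly, while the single fixed-rate on-line pass only shrinks the distance to the teacher; closing this gap is the point of the optimal on-line algorithms the abstract derives.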
Journal
Journal title: Physical Review Letters
Year: 1996
ISSN: 0031-9007, 1079-7114
DOI: 10.1103/physrevlett.76.2188